    Heterogeneous concurrent computing with exportable services

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
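    As an illustration of the exportable-services idea, the sketch below models a host that registers named service routines and a pool of lightweight threads that execute tasks as their input data arrives. It is a minimal Python analogue only; TPVM's actual interface is a C API layered on PVM, and the names used here (register, export) are hypothetical.

        import queue
        import threading

        # Hypothetical Python analogue of exportable services; register() and
        # export() are illustrative names, not TPVM's real C API.
        services = {}                  # service name -> callable
        tasks = queue.Queue()          # pending (service, payload) pairs

        def register(name, fn):
            services[name] = fn        # make the service available by name

        def export(name, payload):
            tasks.put((name, payload)) # data arrival triggers computation

        def worker():
            while True:
                name, payload = tasks.get()
                services[name](payload)
                tasks.task_done()

        # Lightweight threads, rather than heavyweight processes, serve tasks.
        for _ in range(4):
            threading.Thread(target=worker, daemon=True).start()

        register("square", lambda x: print(x * x))
        export("square", 7)
        tasks.join()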

    Security and Privacy Dimensions in Next Generation DDDAS/Infosymbiotic Systems: A Position Paper

    The omnipresent pervasiveness of personal devices will expand the applicability of the Dynamic Data Driven Application Systems (DDDAS) paradigm in innumerable ways. While every single smartphone or wearable device is potentially a sensor with powerful computing and data capabilities, privacy and security in the context of human participants must be addressed to leverage the infinite possibilities of dynamic data-driven application systems. We propose a security and privacy preserving framework for next generation systems that harness the full power of the DDDAS paradigm while (1) ensuring provable privacy guarantees for sensitive data; (2) enabling field-level, intermediate, and central hierarchical feedback-driven analysis for both data volume mitigation and security; and (3) intrinsically addressing uncertainty caused either by measurement error or security-driven data perturbation. These thrusts will form the foundation for secure and private deployments of large scale hybrid participant-sensor DDDAS systems of the future.
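    The abstract does not name a concrete privacy mechanism; Laplace-noise differential privacy is one standard way to obtain the kind of provable guarantee described, and the sketch below also illustrates thrust (3): security-driven perturbation introduces uncertainty of a known scale, unlike ordinary measurement error.

        import random

        # Laplace noise drawn as the difference of two exponentials, which
        # avoids edge cases in inverse-CDF sampling.
        def perturb(value, sensitivity, epsilon):
            """Release value with epsilon-differential privacy (illustrative)."""
            scale = sensitivity / epsilon      # smaller epsilon => more noise
            noise = (random.expovariate(1.0 / scale)
                     - random.expovariate(1.0 / scale))
            return value + noise, scale        # noise scale is reported upstream

        # A field-level device perturbs its reading before forwarding it; the
        # central analysis can model this uncertainty exactly because the
        # scale travels with the value.
        reading, uncertainty = perturb(value=98.6, sensitivity=1.0, epsilon=0.5)
        print(reading, uncertainty)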

    The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of application science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from the resource provider's and end-user's perspectives. To achieve this, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.
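    To make the directive idea concrete, here is a toy sketch of a toolkit pass that expands a portability directive embedded in a build script into a platform-native command while passing every other line through untouched, preserving the original instruction flow. The '#SCVM:' marker and the compiler table are assumptions for illustration, not the project's actual notation.

        import re

        # Invented directive syntax and platform table, for illustration only.
        PLATFORM_COMPILERS = {"cray-xt5": "cc", "generic-linux": "mpicc"}

        def rewrite_build_script(lines, platform):
            out = []
            for line in lines:
                m = re.match(r"#SCVM:\s*compile\s+(.*)", line)
                if m:
                    # Expand the directive into the platform's native command;
                    # all other lines pass through with identical semantics.
                    out.append(f"{PLATFORM_COMPILERS[platform]} {m.group(1)}")
                else:
                    out.append(line)
            return out

        script = ["set -e", "#SCVM: compile -O2 solver.c -o solver", "./solver"]
        print("\n".join(rewrite_build_script(script, "cray-xt5")))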

    Publishing H2O pluglets in UDDI registries

    Interoperability and standards, such as Grid Services, are a focus of current Grid research. The intent is to facilitate resource virtualization, and to accommodate the intrinsic heterogeneity of resources in distributed environments. It is important that new and emerging metacomputing frameworks conform to these standards, in order to ensure interoperability with other grid solutions. In particular, the H2O metacomputing system offers several benefits, including lightweight operation, user-configurability, and selectable security levels. Its applicability would be enhanced even further through support for Grid Services and OGSA compliance. Code deployed into the H2O execution containers is referred to as pluglets. These pluglets constitute the end points of services in H2O, services that are to be made known through publication in a registry. In this contribution, we discuss a system pluglet, referred to as OGSAPluglet, that scans H2O execution containers for available services and publishes them into one or more UDDI registries. We also discuss in detail the algorithms that manage the publication of the appropriate WSDL and GSDL documents for the registration process.
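    The scan-and-publish cycle can be pictured as follows: enumerate the pluglets in a container, generate a service description for each, and push it to every configured registry. All classes and method names in this Python sketch are hypothetical stand-ins for the Java-based H2O and UDDI client APIs.

        from dataclasses import dataclass, field

        # Stand-in types only; the real H2O kernel and UDDI clients are Java.
        @dataclass
        class Pluglet:
            name: str
            def describe(self) -> str:
                return f"<definitions name='{self.name}'/>"  # placeholder WSDL

        @dataclass
        class UDDIRegistry:
            entries: dict = field(default_factory=dict)
            def save_service(self, name, description):
                self.entries[name] = description             # re-publish == update

        def publish_pluglets(pluglets, registries):
            # Scan the container, then register each service in every registry;
            # keying on the service name keeps repeated scans idempotent.
            for p in pluglets:
                for r in registries:
                    r.save_service(p.name, p.describe())

        registry = UDDIRegistry()
        publish_pluglets([Pluglet("MatrixSolver"), Pluglet("DataMover")],
                         [registry])
        print(sorted(registry.entries))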

    Eliciting the End-to-End Behavior of SOA Applications in Clouds

    Availability and performance are key issues in SOA cloud applications. Those applications can be represented as a graph spanning multiple cloud and on-premises environments, forming a very complex computing system that supports increasing numbers and types of users, business transactions, and usage scenarios. In order to rapidly find, predict, and proactively prevent root causes of issues, such as performance degradations and runtime errors, we developed a monitoring solution which is able to elicit the end-to-end behavior of those applications. We insert lightweight components into SOA frameworks and clients, thereby keeping the monitoring impact minimal. Monitoring data collected from call chains is used to assist in diagnosing issues related to performance, errors, and alerts, as well as business and IT transactions.
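    One common way to elicit such end-to-end behavior is correlation-id propagation: every entry point mints an id, every downstream hop inherits it, and per-hop timings are recorded against it. The decorator below is a minimal Python sketch of that idea, not the paper's injected components.

        import functools
        import time
        import uuid

        # Hypothetical decorator-based interception; the paper's components are
        # injected into SOA frameworks, not into application code as here.
        TRACES = []

        def monitored(fn):
            @functools.wraps(fn)
            def wrapper(*args, chain_id=None, **kwargs):
                chain_id = chain_id or uuid.uuid4().hex  # new chain at the edge
                start = time.perf_counter()
                try:
                    return fn(*args, chain_id=chain_id, **kwargs)
                finally:
                    TRACES.append((chain_id, fn.__name__,
                                   time.perf_counter() - start))
            return wrapper

        @monitored
        def billing(order, chain_id=None):
            time.sleep(0.01)

        @monitored
        def checkout(order, chain_id=None):
            billing(order, chain_id=chain_id)    # each hop inherits the id

        checkout("order-42")
        print(TRACES)                            # one chain id, two spans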

    Detection of Distributed Attacks in Hybrid & Public Cloud Networks

    In this paper, early detection of distributed attacks launched from multiple sites of hybrid and public cloud networks is discussed. A prototype Cloud Distributed Intrusion Detection System (CDIDS) is presented along with some basic experiments. Summation of security alerts is applied, which helps to detect distributed attacks while keeping false positives to a minimum; using this mechanism, attacks with a slow iteration rate are detected at an early stage. The objective of our work is to propose a Security Management System (SMS) that can detect malicious activities, including camouflaged attacks, as early as possible, even under conditions where other security management systems become unstable due to intense attack events.
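    The summation mechanism lends itself to a short sketch: alert scores for the same attack signature are accumulated across sites, so a slow-rate probe that stays under every local threshold can still cross a global one early. Thresholds, scores, and signature names below are illustrative values, not the CDIDS implementation.

        from collections import defaultdict

        GLOBAL_THRESHOLD = 5.0       # illustrative value

        def summate(alerts):
            """alerts: iterable of (site, signature, score) tuples."""
            totals = defaultdict(float)
            for site, signature, score in alerts:
                totals[signature] += score
                if totals[signature] >= GLOBAL_THRESHOLD:
                    yield signature, totals[signature]
                    del totals[signature]   # fire once per episode

        # Each site alone stays below any local threshold, but the summed
        # score for the shared signature trips the global one early.
        stream = [("siteA", "ssh-probe", 2.0),
                  ("siteB", "ssh-probe", 2.0),
                  ("siteC", "ssh-probe", 2.0)]
        for signature, total in summate(stream):
            print(f"distributed attack suspected: {signature} (score {total})")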

    Heterogeneous Network Computing: The Next Generation

    In this paper, we discuss some selected aspects of heterogeneous computing in the context of the PVM system, and describe evolutionary enhancements to the system. These extensions, which involve performance optimization, lightweight processes, and client-server computing, suggest useful directions that the next generation of heterogeneous systems might follow. A prototype design of such a next-generation heterogeneous computing framework is also discussed.

    1 Introduction
    Parallel computing methodologies using clusters of heterogeneous systems have demonstrated their viability in the past several years, both for high-performance scientific computing and for more "general purpose" applications. This approach to concurrent computation is based on the premise that a collection of independent computer systems, interconnected by networks, can be transformed into a coherent, powerful, and cost-effective concurrent computing resource through the use of software frameworks. The most common methodology for realizing such a mode of computing is exemplified by PVM (Parallel Virtual Machine), a software framework that emulates a generalized distributed-memory multiprocessor in heterogeneous networked environments.
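    For readers unfamiliar with the model, the sketch below emulates PVM's master/worker message-passing style on a single host: a master spawns workers and exchanges messages with them as if the collection were one machine. It is a rough Python analogue under that assumption; PVM's actual C API (pvm_spawn, pvm_send, and so on) runs across heterogeneous networked hosts.

        import multiprocessing as mp

        # Single-host emulation of the message-passing master/worker model.
        def worker(inbox, outbox):
            n = inbox.get()          # receive a task message from the master
            outbox.put((n, n * n))   # send the result back

        if __name__ == "__main__":
            inbox, outbox = mp.Queue(), mp.Queue()
            hosts = [mp.Process(target=worker, args=(inbox, outbox))
                     for _ in range(3)]
            for h in hosts:
                h.start()
            for n in range(3):
                inbox.put(n)         # distribute work across the "machine"
            print(sorted(outbox.get() for _ in range(3)))
            for h in hosts:
                h.join()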